PVCLN: Point-View Complementary Learning Network for 3D Shape Recognition

Authors

Abstract

As an important topic in computer vision and multimedia analysis, 3D shape recognition has attracted much research attention in recent years. For both point cloud data and multi-view data, various approaches have been proposed with remarkable performance. However, few works simultaneously employ the two modalities to represent 3D shapes, which is complementary and beneficial in our consideration. Moreover, existing multimodal methods mainly focus on the fusion strategy or on exploring the relation between the two modalities; the intra-modality characteristic information and the inter-modality relations are ignored by these methods. In this paper, we tackle the above limitations by introducing a novel Point-View Complementary Learning Network (PVCLN) to explore the potential of both modalities for 3D shape recognition. Inspired by the successful application of graph neural networks in capturing relations between features, we introduce a complementary learning strategy. Concretely, we first separately extract the visual feature from the multi-view data and the structural feature from the point cloud data. We then project the two features into the same feature space and learn across the two modalities by modeling the inter-modality feature affinities. The characteristic information of each modality is also preserved by considering the intra-modality affinities, which compensates for the lacking information and enhances the feature learning process. Finally, the updated visual and structural features are further combined to achieve a unified representation of the 3D shape. We conduct extensive experiments to validate the superiority of the overall network and the effectiveness of each component. Our method is evaluated on the ModelNet40 dataset, and the experimental results demonstrate that the proposed framework achieves competitive performance on the 3D shape recognition task.
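To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the complementary learning step: features from the two modalities are projected into a shared space, then each is updated using both inter-modality and intra-modality affinities before fusion into a unified shape representation. This is not the authors' implementation; the module name ComplementaryBlock, all dimensions, and the use of scaled softmax attention as the affinity function are illustrative assumptions.

```python
# A minimal sketch of point-view complementary learning (assumed design,
# not the released PVCLN code): project both modalities into a shared
# space, update each with inter- and intra-modality affinities, then fuse.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ComplementaryBlock(nn.Module):
    """Updates each modality's features via affinities within and across modalities."""

    def __init__(self, vis_dim=1024, str_dim=1024, shared_dim=512):
        super().__init__()
        # Project both modalities into the same feature space.
        self.proj_vis = nn.Linear(vis_dim, shared_dim)
        self.proj_str = nn.Linear(str_dim, shared_dim)
        self.scale = shared_dim ** -0.5

    def attend(self, query, key, value):
        # Affinity matrix between two feature sets, normalized row-wise.
        affinity = F.softmax(query @ key.transpose(-2, -1) * self.scale, dim=-1)
        return affinity @ value

    def forward(self, vis_feat, str_feat):
        # vis_feat: (B, V, vis_dim) per-view visual features
        # str_feat: (B, P, str_dim) per-point-group structural features
        v = self.proj_vis(vis_feat)
        s = self.proj_str(str_feat)
        # Inter-modality affinities: each modality borrows from the other
        # to compensate for the information it lacks.
        v_cross = self.attend(v, s, s)
        s_cross = self.attend(s, v, v)
        # Intra-modality affinities preserve each modality's own characteristics.
        v_self = self.attend(v, v, v)
        s_self = self.attend(s, s, s)
        # Combine the updated features into a unified shape representation.
        v_out = v + v_cross + v_self
        s_out = s + s_cross + s_self
        unified = torch.cat([v_out.mean(dim=1), s_out.mean(dim=1)], dim=-1)
        return unified  # (B, 2 * shared_dim)


# Example usage: 12 views and 16 point groups per shape, batch of 2.
block = ComplementaryBlock()
fused = block(torch.randn(2, 12, 1024), torch.randn(2, 16, 1024))
print(fused.shape)  # torch.Size([2, 1024])
```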

Related Articles

Shape Recognition in 3D Point-Clouds

While recent improvements in geometry acquisition techniques allow for the easy generation of large and detailed point cloud representations of real-world objects, tasks as basic as, for example, selecting all windows in 3D laser range data of a house still require a disproportionate amount of user interaction. In this paper we address this issue and present a flexible framework for the...

Single and sparse view 3D reconstruction by learning shape priors

In this paper, we aim to reconstruct free-form 3D models from only one or a few silhouettes by learning the prior knowledge of a specific class of objects. Instead of heuristically proposing specific regularities and defining parametric models as in previous research, our shape prior is learned directly from existing 3D models under a framework based on the Gaussian Process Latent Variable ...

View-Point Insensitive Human Pose Recognition using Neural Network

This paper proposes a view-point insensitive human pose recognition system using a neural network. The recognition system consists of a silhouette image capturing module, a data-driven database, and a neural network. The advantages of our system are, first, that it can automatically capture multiple view-point silhouette images of a 3D human model. This automatic capture module helps to reduce the time c...

Deep Learning for Single-View Instance Recognition

Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outp...

Learning Operators for View Independent Object Recognition

In the context of vision-based robotics, our work focuses on the recognition of target objects for object grasping. The objects are arbitrarily shaped, and the viewing position and orientation of the camera are arbitrary as well. Due to various imponderables it is hard to geometrically model all relevant 3D object shapes and all effects of perspective projection. Therefore 3D model based approaches...

Journal

Journal title: IEEE Access

Year: 2021

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2020.3047820